14 research outputs found

    Load-Balanced Fractional Repetition Codes

    We introduce load-balanced fractional repetition (LBFR) codes, which are a strengthening of fractional repetition (FR) codes. LBFR codes have the additional property that multiple node failures can be sequentially repaired by downloading no more than one block from any other node. This allows for better use of the network, and can additionally reduce the number of disk reads necessary to repair multiple nodes. We characterize LBFR codes in terms of their adjacency graphs, and use this characterization to present explicit constructions of LBFR codes with storage capacity comparable to that of existing FR codes. Surprisingly, in some parameter regimes, our constructions of LBFR codes match the parameters of the best constructions of FR codes.
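
    The graph-based view of FR codes that the abstract alludes to can be illustrated with a toy sketch. Below is a minimal, hypothetical example (not the paper's LBFR constructions): in the classical graph construction, each edge of a graph is a data block replicated on its two endpoint nodes, so a single failed node can be repaired by reading exactly one block from each helper.

```python
import itertools

def fr_code_from_graph(num_nodes):
    """Toy fractional repetition code from the complete graph K_n:
    each edge {u, v} is a data block, replicated on nodes u and v."""
    edges = [frozenset(e) for e in itertools.combinations(range(num_nodes), 2)]
    storage = {v: {e for e in edges if v in e} for v in range(num_nodes)}
    return edges, storage

def repair(failed, storage):
    """Repair a failed node by fetching each lost block from its other
    replica -- exactly one block is read from each helper node."""
    downloads = {}  # helper node -> the single block fetched from it
    for block in storage[failed]:
        helper = next(v for v in block if v != failed)
        downloads[helper] = block
    return downloads

edges, storage = fr_code_from_graph(4)
downloads = repair(0, storage)
assert set(downloads.values()) == storage[0]   # node 0's content is restored
assert len(downloads) == len(storage[0])       # one block per helper
```

    The load-balancing question studied in the paper concerns the harder setting of several sequential failures, where the one-block-per-helper budget must hold across all repairs.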

    Weak Compression and (In)security of Rational Proofs of Storage

    We point out an implicit unproven assumption underlying the security of rational proofs of storage that is related to a concept we call weak randomized compression. This is a class of interactive proofs designed in a security model with a rational prover who is encouraged to store data (possibly in a particular format), as otherwise it either fails verification or does not save any storage costs by deviating (in some cases it may even increase costs by "wasting" the space). Weak randomized compression is a scheme that takes a random seed $r$ and a compressible string $s$ and outputs a compression of the concatenation $r \circ s$. Strong compression would compress $s$ by itself (and store the random seed separately). In the context of a storage protocol, it is plausible that the adversary knows a weak compression that uses its incompressible storage advice as a seed to help compress other useful data it is storing, and yet it does not know a strong compression that would perform just as well. It therefore may be incentivized to deviate from the protocol in order to save space. This would be particularly problematic for proofs of replication, designed to encourage provers to store data in a redundant format, which weak compression would likely destroy. We thus motivate the question of whether weak compression can always be used to efficiently construct strong compression, and find (negatively) that a black-box reduction would imply a universal compression scheme in the random oracle model for all compressible efficiently sampleable sources. Implausibility of universal compression aside, we conclude that constructing this black-box reduction for a class of sources is at least as hard as directly constructing a universal compression scheme for that class.
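
    The two notions of compression can be made concrete with a small sketch. This is purely illustrative, assuming `zlib` as a stand-in compressor; it does not model the paper's security game, only the syntactic difference between the two schemes.

```python
import os
import zlib

seed = os.urandom(64)        # incompressible advice r
data = b"replica " * 512     # highly compressible useful data s

# Weak randomized compression: compress the concatenation r . s as one unit.
weak = zlib.compress(seed + data)

# Strong compression: compress s by itself and store the seed r separately.
strong = seed + zlib.compress(data)

# Either way, a rational prover saves space relative to storing r . s raw,
# since s is compressible -- which is exactly the deviation incentive at issue.
assert len(weak) < len(seed + data)
assert len(strong) < len(seed + data)
```

    The open question motivated by the abstract is whether access to the weak scheme always yields a comparably good strong scheme, without which the prover's saved space may come at the cost of the intended storage format.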

    Sharp Threshold Rates for Random Codes

    Suppose that $\mathcal{P}$ is a property that may be satisfied by a random code $C \subseteq \Sigma^n$. For example, for some $p \in (0,1)$, $\mathcal{P}$ might be the property that there exist three elements of $C$ that lie in some Hamming ball of radius $pn$. We say that $R^*$ is the threshold rate for $\mathcal{P}$ if a random code of rate $R^* + \epsilon$ is very likely to satisfy $\mathcal{P}$, while a random code of rate $R^* - \epsilon$ is very unlikely to satisfy $\mathcal{P}$. While random codes are well-studied in coding theory, even the threshold rates for relatively simple properties like the one above are not well understood. We characterize threshold rates for a rich class of properties. These properties, like the example above, are defined by the inclusion of specific sets of codewords which are also suitably "symmetric." For properties in this class, we show that the threshold rate is in fact equal to the lower bound that a simple first-moment calculation obtains. Our techniques not only pin down the threshold rate for the property $\mathcal{P}$ above, they give sharp bounds on the threshold rate for list-recovery in several parameter regimes, as well as an efficient algorithm for estimating the threshold rates for list-recovery in general.
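
    The flavor of the first-moment calculation can be sketched on an even simpler property than the paper's three-codeword example: the existence of a *pair* of codewords within distance $pn$ in a random binary code. This simplification is my own, not the paper's; the expected number of bad pairs is roughly $2^{n(2R - (1 - h(p)))}$, so the first-moment threshold sits at $R^* = (1 - h(p))/2$.

```python
import math

def h2(p):
    """Binary entropy function h(p) = -p log2 p - (1-p) log2 (1-p)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def first_moment_threshold(p):
    """First-moment threshold rate for the toy pairwise property:
    E[# pairs within distance pn] ~ 2^{n(2R - (1 - h(p)))} crosses 1
    at R* = (1 - h(p)) / 2."""
    return (1 - h2(p)) / 2

p, n = 0.1, 200
R_star = first_moment_threshold(p)
# Above R* the expected count grows exponentially in n; below it, it vanishes.
for rate, growing in [(R_star + 0.05, True), (R_star - 0.05, False)]:
    log_expected = n * (2 * rate - (1 - h2(p)))
    assert (log_expected > 0) == growing
```

    The paper's contribution is showing that, for its "symmetric" class of properties, this easy first-moment lower bound is in fact the true threshold.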

    Bounds for List-Decoding and List-Recovery of Random Linear Codes

    A family of error-correcting codes is list-decodable from error fraction $p$ if, for every code in the family, the number of codewords in any Hamming ball of fractional radius $p$ is less than some integer $L$ that is independent of the code length. It is said to be list-recoverable for input list size $\ell$ if for every sufficiently large subset of codewords (of size $L$ or more), there is a coordinate where the codewords take more than $\ell$ values. The parameter $L$ is said to be the "list size" in either case. The capacity, i.e., the largest possible rate for these notions as the list size $L \to \infty$, is known to be $1 - h_q(p)$ for list-decoding, and $1 - \log_q \ell$ for list-recovery, where $q$ is the alphabet size of the code family. In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show the following claims hold with high probability over the choice of the code (below, $\epsilon > 0$ is the gap to capacity). (1) A random linear code of rate $1 - \log_q(\ell) - \epsilon$ requires list size $L \ge \ell^{\Omega(1/\epsilon)}$ for list-recovery from input list size $\ell$. This is in surprising contrast to completely random codes, where $L = O(\ell/\epsilon)$ suffices w.h.p. (2) A random linear code of rate $1 - h_q(p) - \epsilon$ requires list size $L \ge \lfloor h_q(p)/\epsilon + 0.99 \rfloor$ for list-decoding from error fraction $p$, when $\epsilon$ is sufficiently small. (3) A random binary linear code of rate $1 - h_2(p) - \epsilon$ is list-decodable from average error fraction $p$ with list size $L \leq \lfloor h_2(p)/\epsilon \rfloor + 2$. The second and third results together precisely pin down the list sizes for binary random linear codes for both list-decoding and average-radius list-decoding to three possible values.
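
    To see how tightly results (2) and (3) sandwich the list size, one can evaluate both bounds numerically. A small sketch for the binary case (illustrative only; the lower bound in the paper holds only for sufficiently small $\epsilon$):

```python
import math

def h2(p):
    """Binary entropy function."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def list_size_bounds(p, eps):
    """Lower and upper list-size bounds from results (2) and (3) for a
    binary random linear code of rate 1 - h(p) - eps."""
    lower = math.floor(h2(p) / eps + 0.99)   # result (2): L >= this
    upper = math.floor(h2(p) / eps) + 2      # result (3): L <= this
    return lower, upper

lower, upper = list_size_bounds(p=0.1, eps=0.01)
# The bounds always differ by at most 2, leaving only a few possible values of L.
assert lower <= upper
assert upper - lower <= 2
```

    Since $\lfloor x + 0.99 \rfloor$ is either $\lfloor x \rfloor$ or $\lfloor x \rfloor + 1$, the gap between the two bounds is at most 2 for any choice of $p$ and $\epsilon$, which is the "three possible values" claim in the abstract.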

    On list recovery of high-rate tensor codes

    We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS’17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is approximately locally list recoverable, as well as globally list recoverable in probabilistic near-linear time. This was used in turn to give the first capacity-achieving list decodable codes with (1) local list decoding algorithms, and with (2) probabilistic near-linear time global list decoding algorithms. This also yielded constant-rate codes approaching the Gilbert-Varshamov bound with probabilistic near-linear time global unique decoding algorithms. In the current work we obtain the following results: 1. The tensor product of an efficient (poly-time) high-rate globally list recoverable code is globally list recoverable in deterministic near-linear time. This yields in turn the first capacity-achieving list decodable codes with deterministic near-linear time global list decoding algorithms. It also gives constant-rate codes approaching the Gilbert-Varshamov bound with deterministic near-linear time global unique decoding algorithms. 2. If the base code is additionally locally correctable, then the tensor product is (genuinely) locally list recoverable. This yields in turn (non-explicit) constant-rate codes approaching the Gilbert-Varshamov bound that are locally correctable with query complexity and running time $N^{o(1)}$. This improves over prior work by Gopi et al. (SODA’17; IEEE Transactions on Information Theory’18) that only gave query complexity $N^{\epsilon}$ with rate that is exponentially small in $1/\epsilon$. 3. A nearly-tight combinatorial lower bound on output list size for list recovering high-rate tensor codes. This bound implies in turn a nearly-tight lower bound of $N^{\Omega(1/\log \log N)}$ on the product of query complexity and output list size for locally list recovering high-rate tensor codes.
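
    The tensor-product operation at the heart of these results has a simple concrete form: codewords of $C \otimes C$ are matrices whose rows and columns are all codewords of $C$. A toy sketch with the $[3,2]$ binary parity code (far from the high-rate codes studied in the paper, but structurally the same operation):

```python
import numpy as np

# Generator matrix of the [3,2] binary parity code: append a parity bit.
G = np.array([[1, 0, 1],
              [0, 1, 1]])

def tensor_encode(msg):
    """Encode a 2x2 message matrix in the tensor code C (x) C:
    first encode each row with G, then each column, arithmetic mod 2."""
    rows = msg @ G % 2        # 2x3: each row is a codeword of C
    full = (rows.T @ G % 2).T # encode the columns as well -> 3x3
    return full

cw = tensor_encode(np.array([[1, 0], [1, 1]]))
# Every row and every column of the tensor codeword has even parity.
assert (cw.sum(axis=0) % 2 == 0).all()
assert (cw.sum(axis=1) % 2 == 0).all()
```

    The row/column structure is what makes local and list recovery of tensor codes tractable: a symbol can be interrogated through the short row and column codewords passing through it.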

    Dedekind sums s(a, b) and inversions modulo b
